
    Hard Problems on Random Graphs

    Many graph properties are expressible in first-order logic. Whether a graph contains a clique or a dominating set of size k are two examples. Parameterized by solution size, the first problem is W[1]-complete and the second is W[2]-complete, meaning that both are hard in the worst case. If we look at both problems from the perspective of average-case complexity, the picture changes. Clique can be solved in expected FPT time on uniformly distributed graphs of size n, while this is not clear for Dominating Set. We show that it is indeed unlikely that Dominating Set can be solved efficiently on random graphs: if it can, then every first-order expressible graph property can be solved in expected FPT time, too. Furthermore, this remains true when we consider random graphs with an arbitrary constant edge probability. We identify a very simple problem on random matrices that is equally hard to solve on average: given a square Boolean matrix, are there k rows whose logical AND is the zero vector? The related Even Set problem, on the other hand, turns out to be efficiently solvable on random instances, even though it is known to be hard in the worst case.
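
    The matrix problem above is easy to state in code. The following brute-force sketch (our own illustration; names and structure are not from the paper) checks all k-subsets of rows and runs in roughly O(n^k · n) time, which is why average-case behavior, rather than this naive worst-case bound, is the interesting question.

    ```python
    # Brute-force sketch: given a square Boolean matrix, are there k rows whose
    # columnwise logical AND is the zero vector? Illustrative, not the paper's code.
    from itertools import combinations

    def k_rows_and_to_zero(matrix: list[list[bool]], k: int) -> bool:
        n = len(matrix)
        for rows in combinations(range(n), k):
            # The AND is the zero vector iff every column has a False entry
            # among the chosen rows.
            if all(not all(matrix[r][c] for r in rows) for c in range(n)):
                return True
        return False

    m = [[True, False, True],
         [False, True, True],
         [True, True, False]]
    print(k_rows_and_to_zero(m, 2))  # False: every pair of rows shares an all-True column
    print(k_rows_and_to_zero(m, 3))  # True: the AND of all three rows is all-False
    ```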

    The Online Simple Knapsack Problem with Reservation and Removability

    In the online simple knapsack problem, a knapsack of unit size 1 is given and an algorithm is tasked with filling it using a set of items that are revealed one after another. Each item must be accepted or rejected at the time it is presented, and these decisions are irrevocable. No prior knowledge about the set and sequence of items is given. The goal is to maximize the sum of the sizes of all packed items compared to an optimal packing of all items of the sequence. In this paper, we combine two existing variants of the problem that each extend the range of possible actions for a newly presented item by a new option. The first is removability, in which an item that was previously packed into the knapsack may be discarded for good at any point. The second is reservations, which allow the algorithm to delay the decision on accepting or rejecting a new item indefinitely for a fee proportional to the size of the given item. If both removability and reservations are permitted, we show that the competitive ratio of the online simple knapsack problem rises with the relative reservation cost. As soon as any nonzero fee has to be paid for a reservation, no online algorithm can be better than 1.5-competitive. With rising reservation costs, this competitive ratio increases up to the golden ratio (ϕ ≈ 1.618), which is reached at a relative reservation cost of 1 − √5/3 ≈ 0.254. We provide matching upper and lower bounds for relative reservation costs up to this value. From this point onward, the tight bound by Iwama and Taketomi for the removable knapsack problem is the best possible competitive ratio, not using any reservations.
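
    As a quick check of the constants in this abstract (our own arithmetic, not code from the paper): the reservation cost at which the bound reaches the golden ratio is exactly 1 − √5/3.

    ```python
    from math import sqrt

    phi = (1 + sqrt(5)) / 2       # golden ratio
    threshold = 1 - sqrt(5) / 3   # cost at which the ratio reaches phi
    print(round(phi, 3))          # 1.618
    print(round(threshold, 4))    # 0.2546, i.e., ~0.254
    ```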

    Delaying Decisions and Reservation Costs

    We study the Feedback Vertex Set and the Vertex Cover problem in a natural variant of the classical online model that allows for delayed decisions and reservations. Both problems can be characterized by an obstruction set of subgraphs that the online graph needs to avoid. In the case of the Vertex Cover problem, the obstruction set consists of a single edge (i.e., the graph of two adjacent vertices), while for the Feedback Vertex Set problem, the obstruction set contains all cycles. In the delayed-decision model, an algorithm needs to maintain a valid partial solution after every request, allowing it to postpone decisions until the current partial solution is no longer valid for the current request. The reservation model grants an online algorithm the additional option of paying a so-called reservation cost for any given element in order to delay the decision of adding or rejecting it until the end of the instance. For the Feedback Vertex Set problem, we first analyze the variant with only delayed decisions, proving a lower bound of 4 and an upper bound of 5 on the competitive ratio. Then we look at the variant with both delayed decisions and reservations. We show that bounds on the competitive ratio of a problem with delayed decisions imply lower and upper bounds for the same problem when the option of reservations is added. This observation allows us to give a lower bound of min{1+3α, 4} and an upper bound of min{1+5α, 5} for the Feedback Vertex Set problem. Finally, we show that the online Vertex Cover problem, when both delayed decisions and reservations are allowed, is min{1+2α, 2}-competitive, where α ≥ 0 is the reservation cost per reserved vertex.
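
    The stated bounds are simple piecewise-linear functions of α. The short sketch below (our own tabulation of the formulas in the abstract, not code from the paper) makes the saturation points visible.

    ```python
    # Tabulate the competitive-ratio bounds from the abstract as functions of
    # the reservation cost alpha (illustrative helper, not from the paper).
    def fvs_lower(alpha: float) -> float:
        return min(1 + 3 * alpha, 4)  # Feedback Vertex Set, lower bound

    def fvs_upper(alpha: float) -> float:
        return min(1 + 5 * alpha, 5)  # Feedback Vertex Set, upper bound

    def vc_tight(alpha: float) -> float:
        return min(1 + 2 * alpha, 2)  # Vertex Cover, tight bound

    for a in (0.0, 0.25, 0.5, 1.0, 2.0):
        print(a, fvs_lower(a), fvs_upper(a), vc_tight(a))
    ```

    In particular, the Vertex Cover bound saturates at 2 for α ≥ 1/2, and the Feedback Vertex Set bounds saturate at 4 and 5 for α ≥ 1 and α ≥ 4/5, respectively; beyond those costs, the reservation option no longer improves the guarantee.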

    CK2 Phosphorylation of Schistosoma mansoni HMGB1 Protein Regulates Its Cellular Traffic and Secretion but Not Its DNA Transactions

    The parasite resides in mesenteric veins, where fecundated female worms lay hundreds of eggs daily. Some of the egg antigens are trapped in the liver and induce a vigorous granulomatous response. High Mobility Group Box 1 (HMGB1), a nuclear factor, can also be secreted and act as a cytokine. Schistosome HMGB1 (SmHMGB1) is secreted by the eggs and stimulates the production of key cytokines involved in the pathology of schistosomiasis. Thus, understanding the mechanism of SmHMGB1 release becomes mandatory. Here, we addressed the question of how nuclear SmHMGB1 can reach the extracellular space. We found that SmHMGB1 was present in the eggs of infected animals and that the SmHMGB1 localized in the periovular schistosomotic granuloma was phosphorylated. We showed that secretion of SmHMGB1 is regulated by phosphorylation. Moreover, our results suggest that egg-secreted SmHMGB1 may represent a new egg antigen. Therefore, the identification of drugs that specifically target phosphorylation of SmHMGB1 might block its secretion and interfere with the pathogenesis of schistosomiasis.

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M☉. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two-Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Online Simple Knapsack with Reservation Costs

    In the Online Simple Knapsack Problem we are given a knapsack of unit size 1. Items of size at most 1 are presented in an iterative fashion, and an algorithm has to decide whether to permanently reject or include each item into the knapsack without any knowledge about the rest of the instance. The goal is then to pack the knapsack as full as possible. In this work, we introduce a third option in addition to packing and rejecting an item, namely that of reserving an item for the cost of a fixed fraction α of its size. An algorithm may pay this fraction in order to postpone its decision on whether to include or reject the item until after the last item of the instance has been presented. While the classical Online Simple Knapsack Problem does not admit any constantly bounded competitive ratio in the deterministic setting, we find that adding the possibility of reservation makes the problem constantly competitive, with varying competitive ratios depending on the value of α. We give upper and lower bounds for the whole range of reservation costs, with tight bounds for costs up to 1/6 (an area that is strictly 2-competitive) and for costs between √2−1 and 1 (an area that is strictly (2+α)-competitive up to ϕ−1 and strictly 1/(1−α)-competitive above ϕ−1), where ϕ is the golden ratio. With our analysis, we find a counterintuitive characteristic of the problem: intuitively, one would expect that the possibility of rejecting items becomes more and more helpful for an online algorithm with growing reservation costs. However, for reservation costs above √2−1, an algorithm that is unable to reject any items tightly matches the lower bound and is thus the best possible. On the other hand, for any positive reservation cost smaller than 1/6, any algorithm that is unable to reject any items performs considerably worse than one that is able to reject.
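
    Read as a piecewise function of α, the tight ratios above can be written down directly. The sketch below is our reading of the abstract, returning None on the interval (1/6, √2−1), where only non-matching bounds are given.

    ```python
    from math import sqrt

    PHI = (1 + sqrt(5)) / 2  # golden ratio

    def tight_ratio(alpha: float):
        """Tight competitive ratios as stated in the abstract (None where
        upper and lower bounds do not match)."""
        if 0 < alpha <= 1 / 6:
            return 2.0              # strictly 2-competitive
        if sqrt(2) - 1 <= alpha <= PHI - 1:
            return 2 + alpha        # strictly (2+alpha)-competitive
        if PHI - 1 < alpha < 1:
            return 1 / (1 - alpha)  # strictly 1/(1-alpha)-competitive
        return None                 # gap region, or alpha outside (0, 1)

    print(tight_ratio(0.1))  # 2.0
    print(tight_ratio(0.5))  # 2.5
    print(tight_ratio(0.7))  # ~3.33
    ```

    Note that the two formulas agree at α = ϕ−1, where both evaluate to 1+ϕ = ϕ² ≈ 2.618, so the tight bound is continuous there.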